
Apr 27, 2007

First DARPA prosthetic limb comes with virtual reality training

From KurzweilAI.net

Researchers at the Johns Hopkins University Applied Physics Laboratory have developed a prototype of the first fully integrated prosthetic arm that can be controlled naturally, provide sensory feedback, and allow for eight degrees of freedom.

Apr 20, 2007

A virtual reality environment for designing and fitting neural prosthetic limbs

A virtual reality environment for designing and fitting neural prosthetic limbs.

IEEE Trans Neural Syst Rehabil Eng. 2007 Mar;15(1):9-15

Authors: Hauschild M, Davoodi R, Loeb GE

Building and testing novel prosthetic limbs and control algorithms for functional electrical stimulation (FES) is expensive and risky. Here, we describe a virtual reality environment (VRE) to facilitate and accelerate the development of novel systems. In the VRE, subjects/patients can operate a simulated limb to interact with virtual objects. Realistic models of all relevant musculoskeletal and mechatronic components allow the development of entire prosthetic systems in VR before introducing them to the patient. The system is used both by engineers as a development tool and by clinicians to fit prosthetic devices to patients.

Apr 15, 2007

Virtual Reality Forever

Re-blogged from KurzweilAI.net

The University of Illinois at Chicago and the University of Central Florida plan to combine AI, advanced graphics and video game-type technology to enable creation of historical archives of people.

The UIC's Electronic Visualization Laboratory will build a state-of-the-art motion-capture studio to digitize the image and movements of real people, who will go on to live a virtual eternity in virtual reality. Knowledge will be archived into databases. Voices will be analyzed to create synthesized but natural-sounding "virtual" voices. Mannerisms will be studied and used in creating the 3-D avatars.

The team hopes to create virtual people who respond with a high degree of recognition to different voices and the various ways questions are phrased.

Apr 01, 2007

VR 2.0 & gesture recognition

Via Pasta & Vinegar

Intel’s Chief Technology Officer predicts that within five years we “could use gesture recognition to get rid of the remote control” and “drive demand for its important new generation of semiconductors, the superprocessors known as teraflop chips, which Intel previewed in February”.

Virtual reality 1.0 was a bust. The hype was too loud, computers were too slow, networking was too complicated, and because of motion-sickness issues that were never quite resolved, the whole VR experience was, frankly, somewhat nauseating.
(…)
VR 2.0, enhanced by motion capture, is different in many critical ways. Most important, the first batch of applications, such as the Wii, while still primitive, are easy to use, inexpensive, and hard to crash. You don’t get anything close to a fully sense-surround experience, but neither do you feel sick after you put down the wand. The games are simple and intuitive.
(…)
The system enables a presenter to take audiences on a tour of a 3D architectural design or on a fly-through of a model city. And the presenter’s measured theatrics make a big impression. “Everyone’s looking for the new, sexy way to communicate with their employees and their clients. We’re selling their ability to sell.”

Mar 16, 2007

Egocentric depth judgments in optical, see-through augmented reality

Egocentric depth judgments in optical, see-through augmented reality.

IEEE Trans Vis Comput Graph. 2007 May-Jun;13(3):429-42

Authors: Swan JE II, Jones A, Kolstad E, Livingston MA, Smallman HS

A fundamental problem in optical, see-through augmented reality (AR) is characterizing how it affects the perception of spatial layout and depth. This problem is important because AR system developers need both to place graphics in arbitrary spatial relationships with real-world objects and to know that users will perceive them in the same relationships. Furthermore, AR makes possible enhanced perceptual techniques that have no real-world equivalent, such as x-ray vision, where AR users are supposed to perceive graphics as being located behind opaque surfaces. This paper reviews and discusses protocols for measuring egocentric depth judgments in both virtual and augmented environments, and discusses the well-known problem of depth underestimation in virtual environments. It then describes two experiments that measured egocentric depth judgments in AR. Experiment I used a perceptual matching protocol to measure AR depth judgments at medium- and far-field distances of 5 to 45 meters. The experiment studied the effects of upper versus lower visual field location, the x-ray vision condition, and practice on the task. The experimental findings include evidence for a switch in bias, from underestimating to overestimating the distance of AR-presented graphics, at approximately 23 meters, as well as a quantification of how much more difficult the x-ray vision condition makes the task. Experiment II used blind walking and verbal report protocols to measure AR depth judgments at distances of 3 to 7 meters. The experiment examined real-world objects, real-world objects seen through the AR display, virtual objects, and combined real and virtual objects. The results give evidence that the egocentric depth of AR objects is underestimated at these distances, but to a lesser degree than has previously been found for most virtual reality environments.
The results are consistent with previous studies that have implicated a restricted field-of-view, combined with an inability for observers to scan the ground plane in a near-to-far direction, as explanations for the observed depth underestimation.

VR and museums: call for papers

Via Networked Performance


Deadline: Friday April 27, 2007 :: Contributions are welcomed for a new book addressing the construction and interpretation of virtual artefacts within virtual world museums and within physical museum spaces. Particular emphasis is placed on theories of spatiality and strategies of interpretation.

The editors seek papers that intervene in critical discourses surrounding virtual reality and virtual artefacts, to explore the rapidly changing temporal, spatial and theoretical boundaries of contemporary museum display practice. We are especially interested in spatiality as it is employed in the construction of virtual artefacts, as well as the roles these spaces enact as signifiers of historical narrative and sites of social interaction.

We are also interested in the relationship between real-world museums and virtual world museums, with a view to interrogating the construction of meaning within, across and between both. We welcome original scholarly contributions on the topic of new cultural practices and communities related to virtual reality in the context of museum display practice. Papers might address, but are in no way limited to, the following:

* Authenticity and artificiality
* Exploration and discovery
* Physical vs virtual
* Representation/interpretation of virtual reality artefacts - as 3D spaces on screen or in a physical gallery
* Museum visiting in virtual space
* Representation of physical museum spaces in virtual worlds and their relationship to cultural definitions of museum spaces.

Please send a proposal of 500-750 words and a contributor's bio by Friday
April 27, 2007. Authors will be notified by Thursday May 31, 2007. Final drafts of papers are due by Monday October 1, 2007.

Please send your proposal to:

Tara Chittenden
Room 201
Strategic Research Unit
113 Chancery Lane
London WC2A 1PL

Or via email: tara.chittenden[at]lawsociety.org.uk

Mediamatic workshop

Via Networked Performance

 


Mediamatic is organizing a new workshop--Hybrid World Lab--in which the participants develop prototypes for hybrid world media applications. Where the virtual world and the physical world used to be quite separate realms of reality, they are quickly becoming two faces of the same hybrid coin. This workshop investigates the increasingly intimate fusion of digital and physical space from the perspective of a media maker.

The workshop is an intense process in which the participants explore the possibilities of the physical world as an interface to online media: location-based media, everyday objects as media interfaces, urban screens, and cultural applications of RFID technology. Every morning, lectures and lessons bring in new perspectives, project presentations, and introductions to the hands-on workshop tools. Every afternoon, the participants work on their own workshop projects. In five workshop days, every participant will develop a prototype of a hybrid world media project, assisted by outstanding international trainers, lecturers, and technical assistants. The workshop closes with a public presentation in which the issues are discussed and the results are shown.

Topics: Some of the topics that will be investigated in this workshop are: cultural applications and impact of RFID technology and the internet of things; using RFID in combination with other kinds of sensors; ubiquitous computing (ubicomp) and ambient intelligence--services and applications that use chips embedded in household appliances and in public space; locative media tools, car navigation systems, GPS tools, and location-sensitive mobile phones; the web as an interface to the physical world--geotagging and mashups with Google Maps & Google Earth; and games in hybrid space.

Mar 15, 2007

A Massively Shared Virtual World: Solipsis

Via Networked Performance


Solipsis is a pure peer-to-peer system for a massively shared virtual world. There are no central servers at all: it relies only on end-users' machines. Solipsis is a public virtual territory. The world is initially empty, and only users will fill it by creating and running entities. There are no pre-existing cities, inhabitants, or scenarios to respect... Solipsis is open source, so everybody can enhance the protocols and the algorithms. Moreover, the system architecture clearly separates the different tasks, so that peer-to-peer hackers as well as multimedia geeks can find a good place to have fun here! Current versions of Solipsis give users the opportunity to act as pioneers in a Precambrian world: you have only a 2D representation of the virtual world and some basic tools devoted to communication and interaction.
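As a rough illustration of the core idea (each machine tracks only the entities near it in the 2D world, with no central server), here is a minimal sketch. The class and method names are invented for this example; the actual Solipsis protocols are considerably more involved:

```python
import math

class Peer:
    """Toy illustration of a server-less 2D virtual world: each peer
    maintains awareness only of the entities inside its own radius,
    rather than asking a central server who is nearby."""

    def __init__(self, name, x, y, radius=10.0):
        self.name, self.x, self.y, self.radius = name, x, y, radius
        self.neighbors = set()

    def distance_to(self, other):
        # Plain Euclidean distance in the 2D world.
        return math.hypot(self.x - other.x, self.y - other.y)

    def update_awareness(self, peers):
        """Recompute the neighbor set from the peers currently in range."""
        self.neighbors = {p.name for p in peers
                          if p is not self and self.distance_to(p) <= self.radius}
```

In a real peer-to-peer deployment each `Peer` would learn about candidates by gossiping with its current neighbors rather than scanning a global list, which is exactly the kind of protocol work the project invites hackers to improve.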

Impaired Short-term Motor Learning in Multiple Sclerosis: Evidence From Virtual Reality

Impaired Short-term Motor Learning in Multiple Sclerosis: Evidence From Virtual Reality.

Neurorehabil Neural Repair. 2007 Mar 9;

Authors: Leocani L, Comi E, Annovazzi P, Rovaris M, Rossi P, Cursi M, Comola M, Martinelli V, Comi G

OBJECTIVE: Virtual reality (VR) has been proposed as a potentially useful tool for motor assessment and rehabilitation. The objective of this study was to investigate the usefulness of VR in the assessment of short-term motor learning in multiple sclerosis (MS). METHODS: Twelve right-handed MS patients and 12 control individuals performed a motor-tracking task with their right upper limb, following the trajectory of an object projected on a screen along with online visual feedback on hand position from a sensor on the index finger. A pretraining test (3 trials), a training phase (12 trials), and a posttraining test (3 trials) were administered. Distances between performed and required trajectory were computed. RESULTS: Both groups performed worse in depth planes compared to the frontal (x,z) plane (P <.006). MS patients performed worse than control individuals in the frontal plane at both evaluations (P <.015), whereas they had lower percent posttraining improvement in the depth planes only (P =.03). CONCLUSIONS: The authors' VR system detected impaired motor learning in MS patients, especially for task features requiring a complex integration of sensory information (movement in the depth planes). These findings stress the need for careful customization of rehabilitation strategies, which must take into account the patients' motor, sensory, and cognitive limitations.

Augmented reality on cell phones

From Technology Review

Nokia wants to superimpose digital information on the real world using a smart cell phone.

A prototype uses a GPS sensor, a compass, and accelerometers. Using data from these sensors, the phone can calculate the location of just about any object its camera is aimed at:

 

Last October, a team led by Markus Kähäri unveiled a prototype of the system at the International Symposium on Mixed and Augmented Reality. The team added a GPS sensor, a compass, and accelerometers to a Nokia smart phone. Using data from these sensors, the phone can calculate the location of just about any object its camera is aimed at. Each time the phone changes location, it retrieves the names and geographical coördinates of nearby landmarks from an external database. The user can then download additional information about a chosen location from the Web--say, the names of businesses in the Empire State Building, the cost of visiting the building's observatories, or hours and menus for its five eateries.
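The core trick described above (combining a GPS fix with a compass heading to decide which landmark the camera is pointing at) can be sketched as follows. This is a toy illustration, not Nokia's implementation: the landmark database, coordinates, and the simple bearing-matching heuristic are all assumptions:

```python
import math

# Hypothetical landmark database: name -> (latitude, longitude) in degrees.
LANDMARKS = {
    "Empire State Building": (40.7484, -73.9857),
    "Bryant Park": (40.7536, -73.9832),
}

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees [0, 360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360.0

def identify_target(phone_lat, phone_lon, compass_heading, tolerance=10.0):
    """Return the landmark whose bearing best matches the compass heading,
    or None if nothing lies within the angular tolerance."""
    best, best_err = None, tolerance
    for name, (lat, lon) in LANDMARKS.items():
        b = bearing_deg(phone_lat, phone_lon, lat, lon)
        err = abs((b - compass_heading + 180.0) % 360.0 - 180.0)  # wrap-around difference
        if err < best_err:
            best, best_err = name, err
    return best
```

A real system would also weigh distance to each candidate and use the accelerometers to account for the phone's tilt; this sketch only captures the position-plus-heading lookup.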



Read Original Article

Mar 10, 2007

Performance on a virtual reality spatial memory navigation task in depressed patients

Performance on a virtual reality spatial memory navigation task in depressed patients.

Am J Psychiatry. 2007 Mar;164(3):516-9

Authors: Gould NF, Holmes MK, Fantie BD, Luckenbaugh DA, Pine DS, Gould TD, Burgess N, Manji HK, Zarate CA

OBJECTIVE: Findings on spatial memory in depression have been inconsistent. A navigation task based on virtual reality may provide a more sensitive and consistent measure of the hippocampal-related spatial memory deficits associated with depression. METHOD: Performance on a novel virtual reality navigation task and a traditional measure of spatial memory was assessed in 30 depressed patients (unipolar and bipolar) and 19 normal comparison subjects. RESULTS: Depressed patients performed significantly worse than comparison subjects on the virtual reality task, as assessed by the number of locations found in the virtual town. Between-group differences were not detected on the traditional measure. The navigation task showed high test-retest reliability. CONCLUSIONS: Depressed patients performed worse than healthy subjects on a novel spatial memory task. Virtual reality navigation may provide a consistent, sensitive measure of cognitive deficits in patients with affective disorders, representing a mechanism to study a putative endophenotype for hippocampal function.

Mar 03, 2007

Flying with disability in Second Life

Via Matthew Lombard

From Eureka Street magazine ("a publication on public affairs, the arts and theology")

The virtual world Second Life has had a lot of bad press recently in Australia that has focused on the narcissistic and unprincipled behaviour of some of its inhabitants. Nearly six million people have joined Linden Lab's Second Life since it went public in 2003, and there are currently 1.75 million 'active' members who have logged on in the last two months. As a 3D virtual world, everything that exists in this virtual world--objects, buildings, clothes, land--has been created by the residents. Amid all the bad press, it is sometimes overlooked that Second Life also offers a very positive experience to people, especially with regard to understanding disabilities and offering opportunities to those with disabilities. As a student, Niels Schuddeboom travelled to Australia and was a reporter in Sydney for the 2000 Paralympic Games.

Based in the university city of Utrecht in the Netherlands, he is confined to a wheelchair and was forced to drop out of his media course due to an uncompromising academic regime that was unable to work around his physical disabilities. Known as Niles Sopor in Second Life, Niels has found an opportunity to forget his disability and experience walking life through his avatar. "Perhaps the most profound difference I have experienced is that people have treated me differently," he said. "In real life, due to my wheelchair and lack of physical coordination, people often regard me as intellectually as well as physically disabled." In the Netherlands it is unusual for people with physical disabilities to have jobs, and there is a culture of protecting them from many aspects of life.

Second Life has offered Niels the opportunity to break the mould. He runs his own company as a consultant on communications and new media. Some companies are now using Second Life to experiment with alternative marketing campaigns. As well as offering commercial opportunities, Second Life has also provided Niels with the tools to express himself in artistic ways denied him in real life. He has, for example, been able to hold a camera in Second Life and take photos and make short movies. Australian David Wallace, a quadriplegic who works as an IT coordinator at the South Australian Disability Information and Resource Centre in Adelaide, has also found an outlet for his artistic side in Second Life. He recently held an exhibition of his Second Life art at the building that Illinois-based Bradley University has established on Information Island. Unlike Niels, David wanted to buy a wheelchair when he first entered Second Life and couldn't find one! He has tried to build one in Second Life but has had only limited success (...)

 

Read the full story

 

Feb 25, 2007

Second Life on cell phones

Via textually.org


Second Life Reuters reports that Comverse Technology has developed an application that runs Second Life on Java-enabled mobile phones. The platform also includes software that allows integrated SMS and instant messaging and the streaming of mobile video within SL.

 

The size-weight illusion in natural and virtual reality

Seeing size and feeling weight: the size-weight illusion in natural and virtual reality.

Hum Factors. 2007 Feb;49(1):136-44

Authors: Heineken E, Schulte FP

OBJECTIVE: We experimentally tested the degree to which the size-weight illusion depends on perceptual conditions allowing the observer to assume that both the visual and the kinesthetic stimuli of a weight seen and lifted emanate from the same object. We expected that the degree of the illusion would depend on the "realism" provided by the different kinds of virtual reality (VR) used when the weights are seen in virtual reality and at the same time lifted in natural reality. BACKGROUND: Welch and Warren (1980) reported that an intermodal influence can be expected only if perceptual information of different modalities is compellingly related to only one object. METHOD: Objects of different sizes and weights were presented to 50 participants in natural reality or in four virtual realities: two immersive head-mounted display VRs (with or without head tracking) and two nonimmersive desktop VRs (with or without screening from input of the natural environment using a visor). The objects' heaviness was scaled using the magnitude estimation method. RESULTS: Data show that the degree of the illusion is largest in immersive and lowest in nonimmersive virtual realities. CONCLUSION: The higher the degree of the illusion is, the more compelling the situation is perceived to be, and the more the observed data correspond with the data predicted for the illusion in natural reality. This shows that the kind of mediating technology used strongly influences the presence experienced. APPLICATION: The size-weight illusion's sensitivity to conditions that affect the sense of presence makes it a promising objective presence measure.

Jan 27, 2007

An Internet virtual world chat room for smoking cessation

Evaluation of an Internet virtual world chat room for adolescent smoking cessation.

Addict Behav. 2006 Dec 19;

Authors: Woodruff SI, Conway TL, Edwards CC, Elliott SP, Crittenden J

The goal of this longitudinal study was to test an innovative approach to smoking cessation that might be particularly attractive to adolescent smokers. The study was a participatory research effort between academic and school partners. The intervention used an Internet-based, virtual reality world combined with motivational interviewing conducted in real time by a smoking cessation counselor. Participants were 136 adolescent smokers recruited from high schools randomized to the intervention or a measurement-only control condition. Those who participated in the program were significantly more likely than controls to report at the immediate post-intervention assessment that they had abstained from smoking during the past week (p ≤ .01), smoked fewer days in the past week (p ≤ .001), smoked fewer cigarettes in the past week (p ≤ .01), and considered themselves a former smoker (p ≤ .05). Only the number of times quit was statistically significant at a one-year follow-up assessment (p ≤ .05). The lack of longer-term results is discussed, as are methodological challenges in conducting a cluster-randomized smoking cessation study.

Jan 25, 2007

Second Life gets virtual mobile operator

From Textually.org


Vodafone is planning to launch itself as a mobile operator in the game Second Life alongside its Vodafone Island area within the popular virtual world, reports TechDigest.

"Second Life users will be able to use branded handsets to call each other within the world, as well as send text messages."

Second Life Cell Phones can SMS real world Phones

 

Jan 24, 2007

Robotics and virtual reality

Robotics and virtual reality: a perfect marriage for motor control research and rehabilitation.

Assist Technol. 2006;18(2):181-95

Authors: Patton J, Dawe G, Scharver C, Mussa-Ivaldi F, Kenyon R

This article's goal is to outline the motivations, progress, and future objectives for the development of a state-of-the-art device that allows humans to visualize and feel synthetic objects superimposed on the physical world. The programming flexibility of these devices allows for a variety of scientific questions to be answered in psychology, neurophysiology, rehabilitation, haptics, and automatic control. The benefits are most probable in rehabilitation of brain-injured patients, for whom the costs are high, therapist time is limited, and repetitive practice of movements has been shown to be beneficial. Moreover, beyond simple therapy that guides, strengthens, or stretches, the technology affords a variety of exciting potential techniques that can combine our knowledge of the nervous system with the tireless, precise, and swift capabilities of a robot. Because this is a prototype, the system will also guide new experimental methods by probing the levels of quality that are necessary for future design cycles and related technology. Very important to the project is the early and intimate involvement of therapists and other clinicians in the design of software and its user interface. Inevitably, it should also lead the way to new modes of practice and to the commercialization of haptic/graphic systems.

Jan 22, 2007

Location-tracker in Second Life

Via 3DPoint

 

SLStats comes in the form of a wristwatch, available in Hill Valley Square [in SL] in the Huin sim. Once you register with the service in-world, the watch “watches” where you go, tracking your location as you move around the world, as well as which other avatars you come into contact with. The information is used on the SLStats site to rank most popular regions (among SLStats users, of course), and to track how much time you’ve spent in-world, which you can view at a link like this one, which tracks Glitchy: http://slstats.com/users/view/Glitchy+Gumshoe.
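The bookkeeping behind such a tracker (turning a stream of user/region sightings into region popularity rankings and time-in-world totals) could look roughly like the sketch below. SLStats' actual implementation is not public, so every name and design choice here is an assumption:

```python
from collections import defaultdict

class RegionTracker:
    """Toy aggregator in the spirit of SLStats: rank regions by sighting
    count and total up each user's in-world time from periodic sightings."""

    def __init__(self):
        self.region_visits = defaultdict(int)   # region -> sighting count
        self.user_seconds = defaultdict(int)    # user -> accumulated seconds
        self._last_seen = {}                    # user -> last sighting timestamp

    def record(self, user, region, timestamp, session_gap=300):
        """Log one sighting. Time is credited only when the gap since the
        previous sighting is small enough to count as one continuous session."""
        self.region_visits[region] += 1
        last = self._last_seen.get(user)
        if last is not None and 0 < timestamp - last <= session_gap:
            self.user_seconds[user] += timestamp - last
        self._last_seen[user] = timestamp

    def top_regions(self, n=3):
        """Most-sighted regions, most popular first."""
        return sorted(self.region_visits,
                      key=self.region_visits.get, reverse=True)[:n]
```

The session-gap heuristic reflects how a wristwatch-style tracker that only phones home periodically has to infer "time in-world": long silences are treated as logouts rather than one enormous session.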
 

Critical illness VR rehabilitation device

Critical illness VR rehabilitation device (X-VR-D): Evaluation of the potential use for early clinical rehabilitation.

J Electromyogr Kinesiol. 2007 Jan 11;

Authors: Van de Meent H, Baken BC, Van Opstal S, Hogendoorn P

We present a new critical illness VR rehabilitation device (X-VR-D) that enables diversified self-training and is applicable early in the rehabilitation of severely injured or ill patients. The X-VR-D consists of a VR program delivering a virtual scene on a flat screen and simultaneously processing commands to a moving chair mounted on a motion system. With the subject sitting in the moving chair and exposed to a virtual reality environment, the device evokes anticipatory and reactive muscle contractions in trunk and extremities for postural control. In this study we tested the device in 10 healthy subjects to evaluate whether the enforced perturbations indeed evoke sufficient and reproducible EMG muscle activations. We found that particularly fast roll and pitch movements evoke adequate trunk and leg muscle activity. Higher angular velocities and higher angles of inclination elicited broader EMG bursts and larger amplitudes. The muscle activation pattern was highly consistent between different subjects, and although we found some habituation of EMG responses in consecutive training sessions, the general pattern was maintained and was predictable for specific movements. The habituation was characterized by more efficient muscle contractions and better muscle relaxation during the rest positions of the device. Furthermore, we found that the addition of a virtual environment to the training session evoked more preparatory and anticipatory muscle activation than sessions without a virtual environment. We conclude that the X-VR-D is safe and effective in eliciting consistent and reproducible muscle activity in trunk and leg muscles in healthy subjects and thus can be used as a training method.

Jan 15, 2007

Stereo and motion parallax cues in human 3D vision

Stereo and motion parallax cues in human 3D vision: can they vanish without a trace?

J Vis. 2006;6(12):1471-85

Authors: Rauschecker AM, Solomon SG, Glennerster A

In an immersive virtual reality environment, subjects fail to notice when a scene expands or contracts around them, despite correct and consistent information from binocular stereopsis and motion parallax, resulting in gross failures of size constancy (A. Glennerster, L. Tcheang, S. J. Gilson, A. W. Fitzgibbon, & A. J. Parker, 2006). We determined whether the integration of stereopsis/motion parallax cues with texture-based cues could be modified through feedback. Subjects compared the size of two objects, each visible when the room was of a different size. As the subject walked, the room expanded or contracted, although subjects failed to notice any change. Subjects were given feedback about the accuracy of their size judgments, where the "correct" size setting was defined either by texture-based cues or (in a separate experiment) by stereo/motion parallax cues. Because of feedback, observers were able to adjust responses such that fewer errors were made. For texture-based feedback, the pattern of responses was consistent with observers weighting texture cues more heavily. However, for stereo/motion parallax feedback, performance in many conditions became worse such that, paradoxically, biases moved away from the point reinforced by the feedback. This can be explained by assuming that subjects remap the relationship between stereo/motion parallax cues and perceived size or that they develop strategies to change their criterion for a size match on different trials. In either case, subjects appear not to have direct access to stereo/motion parallax cues.